3 research outputs found

    Robust extraction of text from camera images using colour and spatial information simultaneously

    The importance and use of text extraction from camera-based coloured scene images is rapidly increasing. Text within a camera-grabbed image can contain a large amount of metadata about the scene, which can be useful for identification, indexing and retrieval. While segmentation and recognition of text from document images is quite mature, detection of coloured scene text remains a challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of any text features, such as colour, font, size and orientation, as well as of the location of probable text regions. In this paper, we document the development of a fully automatic and highly robust text segmentation technique that can be applied to any camera-grabbed frame, whether a single image or video. A new algorithm is proposed which overcomes the current problems of text segmentation by exploiting text appearance in terms of colour and spatial distribution. When tested on a variety of camera-based images, the proposed technique was found to outperform existing techniques. It also overcomes problems that can arise from an unconstrained complex background. The novelty of the work lies in the fact that this is the first time colour and spatial information have been used simultaneously for text extraction.
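The abstract's core idea, combining a colour cue with a spatial cue to isolate text regions, can be illustrated with a minimal sketch. This is not the paper's algorithm: the quantisation scheme, the bounding-box density criterion, and the `bins` and `min_density` parameters are all illustrative assumptions.

```python
import numpy as np

def segment_by_colour_and_space(img, bins=4, min_density=0.3):
    """Toy sketch: coarsely quantise colours, then keep only colour
    groups whose pixels are spatially compact (they fill their bounding
    box densely, as strokes of uniformly coloured text tend to do).
    Parameters and criterion are hypothetical, not from the paper."""
    h, w, _ = img.shape
    # Coarse colour quantisation: map each channel into `bins` levels,
    # then fold the three quantised channels into a single label.
    q = (img.astype(np.int32) // (256 // bins))
    labels = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    mask = np.zeros((h, w), dtype=bool)
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        # Spatial criterion: reject colour groups whose pixels are
        # scattered thinly across a large bounding box.
        if len(ys) / bbox_area >= min_density:
            mask[ys, xs] = True
    return mask
```

A dense, uniformly coloured block passes the spatial test while isolated pixels of the same colour scattered across the frame do not, which is the simultaneous colour-plus-spatial filtering the abstract describes.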

    Performance analysis of airport ground lighting using computer vision techniques

    In the presented research, mobile camera technology is used to assess the performance of airport ground lighting. In particular, automated assessment of the performance of luminaires in the approach lighting system (ALS) within an airport landing lighting system is proposed. To do this, a camera is installed in the cockpit of an aircraft. As the aircraft approaches the runway, the camera records image data of the lighting pattern. Subsequent image analysis, using the computer vision techniques proposed here, allows a performance metric to be determined for the luminaires. The metric ranks the luminaires by performance, which is very useful information for the airport's maintenance strategy and, to date, is the only method available to evaluate the performance of luminaires in the ALS pattern. To determine the performance metric for the ALS, the luminaires must be identified in each image in which they appear, or tracked throughout the complete acquired image sequence. For each identified luminaire, the pixel information representing that luminaire is extracted and stored. A measure of the pixel grey level per luminaire per image is then used to estimate a performance measure for each luminaire. In addition, since mobile camera technology is used, some of the captured images can become corrupted by the effects of vibration. The effect of this vibration on the pixel grey level information recorded for a given luminaire is not known, so a methodology to quantify this effect is proposed. Since the quality of captured images can vary with vibration, a reliability factor is also automatically determined for each acquired performance measurement. Using the performance measurement data, the luminaires are ranked in a list, which is then reported to the maintenance team at the given airport.
    EThOS - Electronic Theses Online Service, United Kingdom
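The ranking step the abstract describes, a per-luminaire grey-level measure weighted by a per-frame reliability factor, can be sketched as follows. The data layout, the `observations` name, and the weighted-average scoring are assumptions for illustration, not the thesis' actual method.

```python
import numpy as np

def rank_luminaires(observations):
    """Toy sketch of the ranking idea.

    observations: luminaire id -> list of (grey_level, reliability)
    pairs, one per frame in which that luminaire was tracked. The
    reliability weight in [0, 1] down-weights frames corrupted by
    vibration. Returns luminaire ids, best-performing (brightest) first.
    """
    scores = {}
    for lum_id, obs in observations.items():
        greys, weights = zip(*obs)
        # Reliability-weighted mean grey level as the performance score.
        scores[lum_id] = float(np.average(greys, weights=weights))
    return sorted(scores, key=scores.get, reverse=True)
```

A luminaire that appears bright only in a heavily vibration-corrupted frame contributes little to its own score, which mirrors the role the abstract assigns to the automatically determined reliability factor.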